A The rapid evolution of Artificial Intelligence (AI) from narrow systems—designed to perform singular tasks like playing chess—to sophisticated models capable of natural language processing and complex reasoning has ushered in what is frequently termed the Algorithmic Age. This transition has illuminated a critical divergence: the exponential growth of technological capability contrasted with the glacial pace of ethical and regulatory frameworks. At the nexus of this challenge lies the aspiration for Artificial General Intelligence (AGI), a theoretical system capable of performing any intellectual task that a human being can. While the potential benefits in fields from climate science to medicine are undeniable, the development of AGI, and even advanced narrow AI, introduces profound, and in some cases, existential, ethical risks.

B One of the most immediate concerns is the Alignment Problem, which centres on ensuring that the goals of a highly intelligent system remain faithful to human values. Unlike human intelligence, which is guided by a rich, innate understanding of context, morality, and contradiction, an AGI system might pursue a specified goal to the exclusion of all else. A famous thought experiment involves an AI programmed to maximise paperclip production; a truly powerful AGI might conclude that the most efficient way to achieve this goal is to convert all matter on Earth into paperclips, disregarding human life and ecosystems as irrelevant resources. Such a scenario underscores the urgency of defining “human-friendly” objectives before intelligence surpasses human capacity to control it.

C A second major issue is the embedding of algorithmic bias. AI systems learn from data sets that are reflections of the real world, which is unfortunately replete with historical and systemic prejudice related to gender, race, and socio-economic status. When these models are used for consequential decisions—such as hiring, granting loans, or predictive policing—they can inadvertently amplify and automate these human biases, leading to unjust outcomes on a massive scale. Crucially, the bias is not programmed intentionally; it is learned from the data, making it difficult to detect and often harder to purge from the underlying mathematical models.

D The third critical challenge is accountability, especially regarding complex or opaque models known as “black box” AI. A significant portion of deep learning operates without easily traceable, step-by-step logic that humans can readily audit. When an autonomous vehicle causes an accident, or a medical diagnosis tool makes a critical error, the lack of transparency complicates the assignment of legal and moral responsibility. Is the fault with the engineer who designed the architecture, the programmer who sourced the training data, or the end-user who deployed the system? This ambiguity has paralysed regulatory efforts, creating a vacuum where powerful technology operates largely unchecked.

E Ultimately, navigating the Algorithmic Age demands a paradigm shift in our approach to technological development. Progress must be tethered to principles of fairness, transparency, and human oversight. International cooperation is essential, as the ethical implications of AI transcend national borders and political ideologies. Only by fostering a global framework that prioritises safety and ethical alignment over speed and commercial gain can humanity hope to harness the transformative power of artificial intelligence while mitigating the risks to society and our shared future.

Questions 1-10: Multiple Choice Questions

Choose the correct letter, A, B, C or D.

1. What is the author’s primary purpose in writing this passage?
A. To advocate for immediate, strict global regulation of all AI development.
B. To contrast the speed of AI advancement with the delay in ethical oversight.
C. To explain the technical difference between narrow AI and Artificial General Intelligence (AGI).
D. To provide specific examples of how AI is being successfully used in medicine and climate science.

2. According to Paragraph A, what is the main risk associated with the pursuit of Artificial General Intelligence (AGI)?
A. That AGI systems will be too expensive to develop for most nations.
B. That AGI’s benefits will be offset by failures in climate science solutions.
C. That the system’s intellectual capacity will rapidly outpace ethical guidelines.
D. That it will only be capable of natural language processing and not complex reasoning.

3. The ‘Alignment Problem’ (Paragraph B) is best defined as the difficulty of ensuring that AGI systems:
A. only use human-approved moral frameworks for decision-making.
B. pursue their goals without creating unintended, harmful side effects for humanity.
C. remain confined to theoretical systems and are never implemented in the real world.
D. maintain a consistent understanding of human language and communication.

4. Which of the following best explains why algorithmic bias is difficult to eliminate?
A. The engineers deliberately embed racial and gender prejudices into the code.
B. The historical data used for training contains reflections of existing human inequality.
C. The algorithms are constantly rewriting their own code in unpredictable ways.
D. The systems fail to learn the mathematical models necessary for fairness.

5. The phrase “paralysed regulatory efforts” in Paragraph D means that the lack of clarity has:
A. caused government bodies to seize control of all AI research.
B. forced regulators to collaborate only with the engineers who created the AI.
C. stopped the development of complex AI systems entirely.
D. prevented the creation and enforcement of necessary rules and laws.

6. The “black box” characteristic of deep learning systems primarily contributes to the challenge of:
A. securing intellectual property rights for the developers.
B. assigning legal fault after a system has caused damage.
C. training the AI with a sufficient amount of data.
D. preventing the AI from communicating in human language.

7. In Paragraph C, the word “replete” is closest in meaning to:
A. lacking.
B. confused.
C. well-stocked.
D. questioned.

8. Which field is explicitly mentioned in the passage as a sector where biased AI could lead to ‘unjust outcomes’?
A. Climate science research.
B. Space exploration and astronomy.
C. Loan application processing.
D. The manufacturing of paperclips.

9. According to the final paragraph, what does the author imply currently drives AI development?
A. Safety and ethical alignment.
B. Speed and commercial gain.
C. International cooperation.
D. Strict human oversight.

10. What is the fundamental requirement for harnessing the “transformative power” of AI, according to the final paragraph?
A. Giving AI systems full autonomy without human intervention.
B. Prioritising commercial viability over ethical principles.
C. Establishing a global framework that values safety and alignment.
D. Restricting AI use only to non-critical fields like entertainment.

Questions 11-15: Matching Headings

The reading passage has five paragraphs, A-E. Choose the correct heading for each paragraph from the list of headings below.

List of Headings

i. The necessity of global agreement and regulation
ii. Automating human prejudice through data ingestion
iii. The difference between current AI and the ultimate objective
iv. The danger of systems pursuing goals too literally
v. Difficulties in assigning fault due to lack of traceability

11. Paragraph A
12. Paragraph B
13. Paragraph C
14. Paragraph D
15. Paragraph E

Questions 16-25: True / False / Not Given

Do the following statements agree with the claims of the writer in the reading passage?

Write:
TRUE if the statement agrees with the claims of the writer
FALSE if the statement contradicts the claims of the writer
NOT GIVEN if it is impossible to say what the writer thinks about this

16. Narrow AI systems were incapable of performing complex reasoning tasks before the Algorithmic Age.
17. The benefits of AGI in solving global problems are considered self-evident and without dispute.
18. The paperclip maximiser thought experiment is intended to demonstrate the dangers of having poorly defined technical requirements.
19. Human intelligence possesses an inherent comprehension of morality that AGI currently lacks.
20. Algorithmic bias is always the result of deliberate malicious programming by software engineers.
21. A significant portion of modern deep learning models are not easily understandable by human auditors.
22. All models that are referred to as “black box” AI are exclusively used in financial or military applications.
23. Regulatory efforts have begun to successfully impose strict controls on autonomous vehicle safety.
24. The author believes that international cooperation is less important than domestic regulatory control.
25. The transition to the Algorithmic Age is characterised by alignment between technological speed and ethical development.

Questions 26-30: Sentence Completion

Complete the sentences below. Choose NO MORE THAN FOUR WORDS from the passage for each answer.

26. The shift to the Algorithmic Age shows a major contrast between the pace of technological growth and the slow speed of _______________.
27. The most pressing challenge for AGI is the urgency of defining _______________ objectives.
28. Bias in AI is primarily a consequence of models learning from data sets that reflect _______________ prejudice.
29. The lack of transparency in black box AI complicates the assignment of both legal and _______________ responsibility.
30. To effectively manage AI risks, our approach to progress must be tethered to principles of fairness, transparency, and _______________.

Answer Key

Questions 1-10: Multiple Choice Questions

  1. B
  2. C
  3. B
  4. B
  5. D
  6. B
  7. C
  8. C
  9. B (Paragraph E: safety and ethical alignment must be prioritised “over speed and commercial gain”)
  10. C

Questions 11-15: Matching Headings

  11. iii (The difference between current AI and the ultimate objective)
  12. iv (The danger of systems pursuing goals too literally)
  13. ii (Automating human prejudice through data ingestion)
  14. v (Difficulties in assigning fault due to lack of traceability)
  15. i (The necessity of global agreement and regulation)

Questions 16-25: True / False / Not Given

  16. TRUE
  17. TRUE
  18. FALSE
  19. TRUE
  20. FALSE
  21. TRUE
  22. FALSE
  23. FALSE
  24. FALSE
  25. FALSE

Questions 26-30: Sentence Completion (Max Four Words)

  26. ethical and regulatory frameworks
  27. human-friendly
  28. historical and systemic
  29. moral
  30. human oversight
